Last Update: 2025/3/26
OpenAI Completion API
The OpenAI Completion API allows you to generate text completions using OpenAI's language models. This document provides an overview of the API endpoint, its request parameters, and the response structure.
Endpoint
POST https://platform.llmprovider.ai/v1/completions
Request Headers
Header | Value |
---|---|
Authorization | Bearer YOUR_API_KEY |
Content-Type | application/json |
Request Body
The request body should be a JSON object with the following parameters:
Parameter | Type | Description |
---|---|---|
model | string | The model to use (e.g., gpt-3.5-turbo-instruct). |
prompt | string | The prompt to generate completions for. |
max_tokens | integer | (Optional) The maximum number of tokens that can be generated in the completion. |
temperature | number | (Optional) Sampling temperature, between 0 and 2. |
top_p | number | (Optional) Nucleus sampling probability, between 0 and 1. |
n | integer | (Optional) Number of completions to generate for each prompt. |
stop | array | (Optional) Up to 4 sequences where the API will stop generating further tokens. |
presence_penalty | number | (Optional) Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far. |
frequency_penalty | number | (Optional) Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far. |
Example Request
{
"model": "gpt-3.5-turbo-instruct",
"prompt": "Say this is a test",
"max_tokens": 7,
"temperature": 0
}
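The example above sends only a model, a prompt, a token cap, and a fixed temperature. A request that also exercises the optional sampling parameters from the table might look like the following; the values shown are purely illustrative, not recommendations:
{
  "model": "gpt-3.5-turbo-instruct",
  "prompt": "Write a tagline for an ice cream shop.",
  "max_tokens": 32,
  "temperature": 0.7,
  "top_p": 1,
  "n": 2,
  "stop": ["\n\n"],
  "presence_penalty": 0,
  "frequency_penalty": 0.5
}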
Response Body
The response body will be a JSON object containing the generated completions and other metadata.
Field | Type | Description |
---|---|---|
id | string | Unique identifier for the completion. |
object | string | The object type, which is always text_completion for this endpoint. |
created | integer | Timestamp of when the completion was created. |
model | string | The model used for the completion. |
choices | array | A list of generated completion choices. |
choices[].finish_reason | string | The reason why the completion ended (e.g., stop, length). |
choices[].index | integer | The index of the choice in the returned list. |
choices[].logprobs | object | (Optional) Log probabilities of the tokens in the completion. |
choices[].text | string | The generated text for the completion. |
usage | object | Token usage statistics for the request. |
Example Response
{
"id": "cmpl-uqkvlQyYK7bGYrRHQ0eXlWi7",
"object": "text_completion",
"created": 1589478378,
"model": "gpt-3.5-turbo-instruct",
"choices": [
{
"text": "\n\nThis is indeed a test",
"index": 0,
"logprobs": null,
"finish_reason": "length"
}
],
"usage": {
"prompt_tokens": 5,
"completion_tokens": 7,
"total_tokens": 12
}
}
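Assuming a successful response shaped like the example above, the generated text and token usage can be read out of the choices and usage fields. A minimal Python sketch (the resp variable is assumed to hold the parsed JSON body):
# A minimal sketch, assuming resp is the parsed JSON body of a
# successful completion response shaped like the example above.
def summarize_completion(resp):
    # Each entry in "choices" is one generated completion
    for choice in resp['choices']:
        print(f"choice {choice['index']} ({choice['finish_reason']}): {choice['text']!r}")

    # "usage" reports token consumption for the whole request
    usage = resp['usage']
    print(f"{usage['prompt_tokens']} prompt + {usage['completion_tokens']} completion "
          f"= {usage['total_tokens']} total tokens")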
Example Requests
The same request is shown below in Shell (curl), Node.js, and Python.
Shell
curl -X POST https://platform.llmprovider.ai/v1/completions \
-H "Authorization: Bearer $YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-3.5-turbo-instruct",
"prompt": "Hello, world!",
"max_tokens": 50
}'
Node.js
const axios = require('axios');

const apiKey = 'YOUR_API_KEY';
const url = 'https://platform.llmprovider.ai/v1/completions';

// Request body: the model, the prompt, and a cap on generated tokens
const data = {
  model: 'gpt-3.5-turbo-instruct',
  prompt: 'Hello, world!',
  max_tokens: 50
};

// Authentication and content-type headers
const headers = {
  'Authorization': `Bearer ${apiKey}`,
  'Content-Type': 'application/json'
};

axios.post(url, data, { headers })
  .then(response => {
    console.log('Response:', response.data);
  })
  .catch(error => {
    console.error('Error:', error);
  });
Python
import requests

api_key = 'YOUR_API_KEY'
url = 'https://platform.llmprovider.ai/v1/completions'

headers = {
    'Authorization': f'Bearer {api_key}',
    'Content-Type': 'application/json'
}

# Request body: the model, the prompt, and a cap on generated tokens
data = {
    'model': 'gpt-3.5-turbo-instruct',
    'prompt': 'Hello, world!',
    'max_tokens': 50
}

# json=data serializes the body and sends it as application/json
response = requests.post(url, headers=headers, json=data)

if response.status_code == 200:
    print('Response:', response.json())
else:
    print('Error:', response.status_code, response.text)
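The n and stop parameters can be combined to request several candidate completions in one call. A minimal sketch building on the Python example above; the prompt and parameter values are purely illustrative:
import requests

api_key = 'YOUR_API_KEY'
url = 'https://platform.llmprovider.ai/v1/completions'

# Ask for three candidate completions and stop generation at a blank line
payload = {
    'model': 'gpt-3.5-turbo-instruct',
    'prompt': 'Suggest a name for a coffee shop:',
    'max_tokens': 40,
    'n': 3,
    'stop': ['\n\n'],
    'temperature': 0.8
}

response = requests.post(
    url,
    headers={'Authorization': f'Bearer {api_key}'},
    json=payload  # requests serializes the body and sets Content-Type
)
response.raise_for_status()

# One entry per requested completion; index identifies each candidate
for choice in response.json()['choices']:
    print(f"[{choice['index']}] {choice['text'].strip()}")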
For more details, refer to the OpenAI API documentation.